We address the efficient computation of influence functions for tracing predictions back to the training data. We propose and analyze a new approach for accelerating the inverse-Hessian computation based on the Arnoldi iteration. With this improvement, we achieve, to the best of our knowledge, the first successful implementation of influence functions that scales to full-size (language and vision) Transformer models with billions of parameters. We evaluate our approach on image classification and sequence tasks with millions of training examples. Our code will be available at https://github.com/google-research/jax-influence.
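As a minimal illustration (not the authors' JAX implementation; the function names and the toy matrix are ours), the Arnoldi iteration underlying such approaches builds an orthonormal Krylov basis from repeated Hessian-vector products; the eigenvalues of the resulting small Hessenberg matrix (the Ritz values) approximate the dominant eigenvalues of the Hessian:

```python
import numpy as np

def arnoldi(hvp, dim, n_iters, seed=0):
    """Build an orthonormal Krylov basis Q and Hessenberg matrix H
    from repeated applications of `hvp`, a Hessian-vector-product
    function.  Here `hvp` is just a plain matrix product."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((dim, n_iters + 1))
    H = np.zeros((n_iters + 1, n_iters))
    q = rng.normal(size=dim)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(n_iters):
        v = hvp(Q[:, j])
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:          # Krylov space is exhausted
            break
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :n_iters], H[:n_iters, :n_iters]

# Toy "Hessian": a symmetric PSD matrix whose top eigenvalues we recover.
A = np.diag([10.0, 5.0, 1.0, 0.1, 0.01])
Q, H = arnoldi(lambda v: A @ v, dim=5, n_iters=5)
ritz = np.sort(np.linalg.eigvals(H).real)[::-1]
print(ritz[:2])   # dominant Ritz values approximate the top eigenvalues
```

In practice the Hessian is never materialized: `hvp` would be an autodiff Hessian-vector product, and the dominant eigenpairs recovered this way allow an approximate inverse-Hessian-vector product to be applied cheaply.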
The performance of Deep Learning (DL) models depends on the quality of labels. In some areas, the involvement of human annotators may introduce noise into the data. When these corrupted labels are blindly treated as the ground truth (GT), DL models suffer degraded performance. This paper presents a method that aims to learn a confident model in the presence of noisy labels, in conjunction with estimating the uncertainty of multiple annotators. We robustly estimate the predictions given only the noisy labels by adding an entropy- or information-based regularizer to the classifier network. We conduct our experiments on noisy versions of the MNIST, CIFAR-10, and FMNIST datasets. Our empirical results demonstrate the robustness of our method, which outperforms or performs comparably to other state-of-the-art (SOTA) methods. In addition, we evaluate the proposed method on a curated dataset in which the noise type and level of each annotator depend on the input image style. We show that our approach performs well and is adept at learning annotators' confusion. Moreover, we demonstrate that our model is more confident in predicting the GT than other baselines. Finally, we assess our approach on a segmentation problem and showcase its effectiveness with experiments.
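As a rough sketch of what an entropy-based regularizer can look like (the paper's exact formulation may differ; the function name, the weight `beta`, and the toy numbers are ours):

```python
import numpy as np

def entropy_regularized_loss(probs, noisy_labels, beta=0.1):
    """Cross-entropy on the (possibly noisy) labels plus an entropy
    term on the predicted distributions.  `probs` has shape
    (n_samples, n_classes) and each row sums to one."""
    eps = 1e-12
    ce = -np.log(probs[np.arange(len(noisy_labels)), noisy_labels] + eps).mean()
    entropy = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return ce + beta * entropy

probs = np.array([[0.9, 0.05, 0.05],
                  [0.2, 0.7, 0.1]])
labels = np.array([0, 1])
print(entropy_regularized_loss(probs, labels))
```

The sign and magnitude of `beta` control whether confident (low-entropy) predictions are encouraged or penalized.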
The availability of frequent and cost-free satellite images is in growing demand in the research world. Satellite constellations such as Landsat 8 and Sentinel-2 provide a massive amount of valuable data daily. However, the discrepancy in these satellites' sensor characteristics makes it ineffective to apply a segmentation model trained on one dataset to the other, which is why domain adaptation techniques have recently become an active research area in remote sensing. In this paper, an experiment in domain adaptation through style transfer is conducted using the HRSemI2I model to narrow the sensor discrepancy between Landsat 8 and Sentinel-2. This paper's main contribution is analyzing the expediency of that approach by comparing segmentation results on domain-adapted images with those on images without adaptation. The HRSemI2I model, adjusted to work with 6-band imagery, shows significant intersection-over-union performance improvement in both mean and per-class metrics. A second contribution is providing different schemes of generalization between two label schemes, NALCMS 2015 and CORINE: the first is standardization through higher-level land cover classes, and the second is harmonization through validation in the field.
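For reference, the intersection-over-union metric used in such comparisons can be computed per class and then averaged; a minimal sketch (the function name and the toy labels are illustrative):

```python
import numpy as np

def iou_per_class(y_true, y_pred, n_classes):
    """Per-class intersection-over-union from flat label arrays."""
    ious = []
    for c in range(n_classes):
        inter = np.sum((y_true == c) & (y_pred == c))
        union = np.sum((y_true == c) | (y_pred == c))
        ious.append(inter / union if union else float("nan"))
    return np.array(ious)

# Six pixels, three classes; mean IoU ignores classes absent from both maps.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
ious = iou_per_class(y_true, y_pred, 3)
print(ious, np.nanmean(ious))
```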
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once, which was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
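K-fold cross-validation, mentioned above, partitions the training set so that every sample is used for validation exactly once; a minimal sketch of the index bookkeeping (illustrative, not tied to any particular challenge pipeline):

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation:
    shuffle once, split into k folds, and rotate the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Every sample appears in exactly one validation fold.
for train, val in kfold_indices(10, k=5):
    assert set(train) | set(val) == set(range(10))
    assert not set(train) & set(val)
```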
This report summarises the outcomes of a systematic literature search to identify Bayesian network models used to support decision making in healthcare. After describing the search methodology, the selected research papers are briefly reviewed, with a view to identifying publicly available models and datasets that are well suited to analysis using the causal interventional analysis software tool developed in Wang B, Lyle C, Kwiatkowska M (2021). Finally, an experimental evaluation of applying the software to a selection of models is carried out, and preliminary results are reported.
Deep learning semantic segmentation algorithms have provided improved frameworks for the automated production of Land-Use and Land-Cover (LULC) maps, significantly increasing the frequency of map generation as well as the consistency of production quality. In this research, a total of 28 model variations were examined to improve the accuracy of LULC maps. The experiments were carried out using Landsat 5/7 or Landsat 8 satellite images with North American Land Change Monitoring System labels. The performance of various CNN and extension combinations was assessed, with VGGNet with an output stride of 4 and a modified U-Net architecture providing the best results. Additional expanded analysis of the generated LULC maps is also provided. Using a deep neural network, this work achieved 92.4% accuracy for 13 LULC classes within southern Manitoba, representing a 15.8% improvement over published results for the NALCMS. Over the large regions of interest, the higher radiometric resolution of Landsat 8 data resulted in better overall accuracies (88.04%) compared to Landsat 5/7 (80.66%) for 16 LULC classes. This represents an 11.44% and 4.06% increase in overall accuracy compared to previously published NALCMS results, with a larger land area and more LULC classes incorporated into the models than in other published LULC map automation methods.
The automatic clinical caption generation problem is addressed with the proposed model, which combines frontal chest X-ray scans with structured patient information from radiology records. We combine two language models, Show-Tell and GPT-3, to generate comprehensive and descriptive radiology records. The proposed combination of these models produces a textual summary containing the pathologies found, their locations, and 2D heatmaps localizing each pathology on the original X-ray scans. The proposed model is tested on two medical datasets, Open-I and MIMIC-CXR, as well as the general-purpose MS-COCO dataset. The results, measured with natural language assessment metrics, demonstrate its effective applicability to chest X-ray image captioning.
We present Sauron, a filter pruning method that eliminates redundant feature maps by discarding the corresponding filters using automatically adjusted, layer-specific thresholds. Furthermore, Sauron minimizes a regularization term that, as we show with various metrics, promotes the formation of feature map clusters. In contrast to most filter pruning methods, Sauron is single-phase, resembling typical neural network optimization and requiring fewer hyperparameters and design decisions. Additionally, unlike other cluster-based approaches, our method does not require pre-selecting the number of clusters, which is non-trivial to determine and varies across layers. We evaluated Sauron and three state-of-the-art filter pruning methods on three medical image segmentation tasks, a field where filter pruning has received little attention and where it can help build efficient models for medical-grade computers that cannot use cloud services due to privacy considerations. Sauron achieved models with higher performance and pruning rates than the competing pruning methods. Moreover, since Sauron removes filters during training, its optimization accelerates over time. Finally, we show that the feature maps of Sauron-pruned models are highly interpretable. The Sauron code is publicly available at https://github.com/jmlipman/sauronunet.
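Sauron's actual criterion is cluster-based; the sketch below only illustrates the simpler, generic idea of pruning filters against a layer-specific threshold (here a fraction of the layer's largest filter norm; all names and numbers are ours, not the paper's):

```python
import numpy as np

def prune_filters(weights, rel_threshold=0.1):
    """Keep filters whose L2 norm exceeds a layer-specific threshold,
    defined as a fraction of that layer's largest filter norm.
    `weights` has shape (out_channels, in_channels, kH, kW)."""
    norms = np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)
    keep = norms > rel_threshold * norms.max()
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))   # a toy conv layer with 8 filters
w[2] *= 1e-3                        # make one filter nearly dead
pruned, keep = prune_filters(w)
print(keep.sum(), "of", len(keep), "filters kept")
```

Because the threshold is relative to each layer's own norm scale, the same `rel_threshold` can be reused across layers whose weights live on different scales.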
In recent years, deep learning has become one of the most effective computer vision tools for remote sensing scientists. However, remote sensing datasets lack training labels, which means scientists need to solve the domain adaptation problem to narrow the discrepancy between satellite image datasets. As a result, subsequently trained image segmentation models generalize better and can use an existing set of labels instead of requiring new ones. This work proposes an unsupervised domain adaptation model that preserves semantic consistency and per-pixel quality of the images during the style-transferring phase. This paper's main contribution is proposing an improved architecture of the SemI2I model, which significantly boosts the performance of the proposed model and makes it competitive with the state-of-the-art CyCADA model. A second contribution is testing the CyCADA model on multi-band remote sensing datasets such as WorldView-2 and SPOT-6. The proposed model preserves semantic consistency and per-pixel quality of the images during the style-transferring phase. Thus, the semantic segmentation model trained on the adapted images shows substantial performance gains compared to the SemI2I model and reaches results similar to the state-of-the-art CyCADA model. Future development of the proposed method could include ecological domain transfer, {\em a priori} quality assessment of the data distribution, or exploration of the inner architecture of the domain adaptation model.
Homophily is a graph property describing the tendency of edges to connect similar nodes; the opposite is called heterophily. While homophily is natural for many real-world networks, there are also networks without this property. It is often believed that standard message-passing graph neural networks (GNNs) do not perform well on non-homophilous graphs, and thus such datasets need special attention. Although much effort has been devoted to developing graph representation learning methods for heterophilous graphs, there is no universally agreed-upon measure of homophily. Several metrics for measuring homophily are used in the literature; however, we show that all of them have critical drawbacks preventing comparison of homophily levels across different datasets. We formalize desirable properties for a proper homophily measure and show how the existing literature on the properties of classification performance metrics can be linked to our problem. In doing so, we find a measure that we call adjusted homophily, which satisfies the desired properties better than existing homophily measures. Interestingly, this measure is related to two classification performance metrics: Cohen's kappa and the Matthews correlation coefficient. We then go beyond the homophily-heterophily dichotomy and propose a new property, which we call label informativeness (LI), that characterizes how much information a neighbor's label provides about a node's label. We show theoretically that LI is comparable across datasets with different numbers of classes and class-size balances. Through a series of experiments, we show that LI is a better predictor of GNN performance on a dataset than homophily. We also demonstrate that LI explains why GNNs can sometimes perform well on heterophilous datasets, a phenomenon recently observed in the literature.
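To make the measures concrete: edge homophily is the fraction of edges joining same-class nodes, and an adjusted variant subtracts the value expected under chance-level connections. The sketch below follows that spirit; the paper's exact definition of adjusted homophily may differ, and the example graph is ours:

```python
import numpy as np

def adjusted_homophily(edges, labels):
    """Edge homophily corrected for class imbalance:
    h_adj = (h_edge - sum_k p_k^2) / (1 - sum_k p_k^2),
    where p_k is the fraction of edge endpoints in class k."""
    edges = np.asarray(edges)
    labels = np.asarray(labels)
    h_edge = np.mean(labels[edges[:, 0]] == labels[edges[:, 1]])
    endpoints = labels[edges].ravel()        # each edge counted at both ends
    p = np.bincount(endpoints) / endpoints.size
    baseline = np.sum(p ** 2)
    return (h_edge - baseline) / (1 - baseline)

# A 4-cycle with two balanced classes: half the edges are intra-class,
# which is exactly what chance predicts, so the adjusted value is 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
labels = [0, 0, 1, 1]
print(adjusted_homophily(edges, labels))
```

Subtracting the chance baseline is what makes the value comparable across datasets with different class-size balances, unlike raw edge homophily.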